Similar resources
GSNs: Generative Stochastic Networks
We introduce a novel training principle for generative probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework generalizes Denoising Auto-Encoders (DAE) and is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution is a conditiona...
Full text: Deep Generative Stochastic Networks Trainable by Backprop
We introduce a novel training principle for probabilistic models that is an alternative to maximum likelihood. The proposed Generative Stochastic Networks (GSN) framework is based on learning the transition operator of a Markov chain whose stationary distribution estimates the data distribution. The transition distribution of the Markov chain is conditional on the previous state, generally invo...
Full text: Variational Generative Stochastic Networks with Collaborative Shaping
We develop an approach to training generative models based on unrolling a variational autoencoder into a Markov chain, and shaping the chain’s trajectories using a technique inspired by recent work in Approximate Bayesian computation. We show that the global minimizer of the resulting objective is achieved when the generative model reproduces the target distribution. To allow finer control over...
Full text: Multimodal Transitions for Generative Stochastic Networks
Generative Stochastic Networks (GSNs) have been recently introduced as an alternative to traditional probabilistic modeling: instead of parametrizing the data distribution directly, one parametrizes a transition operator for a Markov chain whose stationary distribution is an estimator of the data generating distribution. The result of training is therefore a machine that generates samples throu...
Full text: Supplemental Material for: Deep Generative Stochastic Networks Trainable by Backprop
Let Pθn(X|X̃) be a denoising auto-encoder that has been trained on n training examples. Pθn(X|X̃) assigns a probability to X, given X̃, when X̃ ∼ C(X̃|X). This estimator defines a Markov chain Tn obtained by alternately sampling an X̃ from C(X̃|X) and an X from Pθn(X|X̃). Let πn be the asymptotic distribution of the chain defined by Tn, if it exists. The following theorem is proven by Bengio et al. ...
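The alternating chain described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the Gaussian corruption C(X̃|X) and the analytic "denoiser" standing in for a sample from Pθn(X|X̃) are both assumptions chosen so the chain has a known stationary behaviour (a trained DAE would replace the stand-in with a learned conditional sampler).

```python
import numpy as np

def corrupt(x, rng, sigma=0.5):
    """C(X~|X): isotropic Gaussian corruption (an illustrative choice)."""
    return x + sigma * rng.normal(size=x.shape)

def toy_denoiser(x_tilde, sigma=0.5, data_mean=0.0):
    """Stand-in for sampling X ~ P_theta(X|X~): shrink toward the data mean.

    This is the exact posterior mean for a unit-variance Gaussian 'data
    distribution' under Gaussian corruption; a real GSN would use a trained
    denoising auto-encoder here.
    """
    lam = 1.0 / (1.0 + sigma ** 2)
    return lam * x_tilde + (1.0 - lam) * data_mean

def gsn_chain(x0, steps=2000, sigma=0.5, seed=0):
    """Run the Markov chain T_n: alternate X~ ~ C(X~|X), then X ~ P(X|X~)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(steps):
        x_tilde = corrupt(x, rng, sigma)   # corruption step
        x = toy_denoiser(x_tilde, sigma)   # reconstruction step
        samples.append(x.copy())
    return np.array(samples)

samples = gsn_chain(np.array([5.0]))
```

Even started far from the data (here at 5.0), the chain's iterates drift toward the region the denoiser favours; the asymptotic distribution πn of this chain is the object the theorem characterizes.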
Journal
Journal title: Information and Inference
Year: 2016
ISSN: 2049-8764,2049-8772
DOI: 10.1093/imaiai/iaw003